Sparse trace norm regularization
Authors
Abstract
Similar resources
Sparse Trace Norm Regularization
We study the problem of estimating multiple predictive functions from a dictionary of basis functions in the nonparametric regression setting. Our estimation scheme assumes that each predictive function can be estimated in the form of a linear combination of the basis functions. By assuming that the coefficient matrix admits a sparse low-rank structure, we formulate the function estimation prob...
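A standard building block for trace-norm-regularized estimation of this kind is the proximal operator of the trace (nuclear) norm, which soft-thresholds the singular values of the coefficient matrix and thereby pushes it toward low rank. The sketch below is illustrative only (the function name `prox_trace_norm` and the threshold `tau` are assumptions, not notation from the paper):

```python
import numpy as np

def prox_trace_norm(W, tau):
    """Proximal operator of tau * ||W||_* (trace/nuclear norm):
    soft-threshold the singular values of W."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_thresh = np.maximum(s - tau, 0.0)  # shrink each singular value by tau
    return U @ np.diag(s_thresh) @ Vt

# Shrinking singular values drives the coefficient matrix toward low rank,
# which is the effect a sparse low-rank penalty exploits.
W = np.random.randn(6, 4)
W_low = prox_trace_norm(W, tau=1.0)
```

Singular values below `tau` are zeroed, so the output typically has strictly lower rank than the input.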
On Trace Norm Regularization for Document Modeling
We propose a model for text. At the lowest level, we use a simple language model which (at best) can only capture local structure. However, the model allows for these lowest-level components to be structured so that higher-level ordering and grouping which is present in the data may be reflected in the model. 1 The Data We assume that the text data is made up of a set of units. We represent the...
Trace Lasso: a trace norm regularization for correlated designs
Using the ℓ1-norm to regularize the estimation of the parameter vector of a linear model leads to an unstable estimator when covariates are highly correlated. In this paper, we introduce a new penalty function which takes into account the correlation of the design matrix to stabilize the estimation. This norm, called the trace Lasso, uses the trace norm of the selected covariates, which is a co...
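The trace Lasso penalty is the nuclear norm of the design matrix with its columns rescaled by the weights, Ω(w) = ‖X Diag(w)‖_*. A useful property is that it interpolates between the ℓ1 norm (orthonormal design) and the ℓ2 norm (identical unit-norm columns). A minimal sketch, assuming unit-norm columns; the helper name `trace_lasso_penalty` is ours:

```python
import numpy as np

def trace_lasso_penalty(X, w):
    """Trace Lasso penalty: nuclear norm of X Diag(w)."""
    return np.linalg.norm(X @ np.diag(w), ord='nuc')

# Orthonormal design: the penalty reduces to the l1 norm of w.
w = np.array([1.0, -2.0, 0.5])
pen_orth = trace_lasso_penalty(np.eye(3), w)   # → 3.5, i.e. |w|_1

# Identical unit-norm columns: X Diag(w) is rank one, so the
# penalty reduces to the l2 norm of w.
x = np.ones((4, 1)) / 2.0                      # unit-norm column
X_same = np.hstack([x, x, x])
w2 = np.array([3.0, 4.0, 0.0])
pen_same = trace_lasso_penalty(X_same, w2)     # → 5.0, i.e. |w2|_2
```

Between these two extremes, correlated columns pull the penalty toward grouped (ℓ2-like) shrinkage, which is what stabilizes estimation under correlated designs.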
Sparse Portfolio Selection via Quasi-Norm Regularization
to obtain an approximate second-order KKT solution of the ℓp-norm models in polynomial time with a fixed error tolerance, and then test our ℓp-norm models on CRSP (1992-2013) and also S&P 500 (2008-2012) data. The empirical results illustrate that our ℓp-norm regularized models can generate portfolios of any desired sparsity with portfolio variance, portfolio return and Sharpe Ratio comparable t...
Lifted coordinate descent for learning with trace-norm regularization
We consider the minimization of a smooth loss with trace-norm regularization, which is a natural objective in multi-class and multitask learning. Even though the problem is convex, existing approaches rely on optimizing a non-convex variational bound, which is not guaranteed to converge, or repeatedly perform singular-value decomposition, which prevents scaling beyond moderate matrix sizes. We ...
Journal
Journal title: Computational Statistics
Year: 2013
ISSN: 0943-4062, 1613-9658
DOI: 10.1007/s00180-013-0440-7